Federated Learning Priorities Under the European Union Artificial Intelligence Act
Woisetschläger, Herbert, Erben, Alexander, Marino, Bill, Wang, Shiqiang, Lane, Nicholas D., Mayer, Ruben, Jacobsen, Hans-Arno
The age of AI regulation is upon us, with the European Union Artificial Intelligence Act (AI Act) leading the way. Our key inquiry is how this will affect Federated Learning (FL), whose starting point of prioritizing data privacy while performing ML fundamentally differs from that of centralized learning. We believe the AI Act and future regulations could be the missing catalyst that pushes FL toward mainstream adoption. However, this can only occur if the FL community reprioritizes its research focus. In our position paper, we perform a first-of-its-kind interdisciplinary analysis (legal and ML) of the impact the AI Act may have on FL and make a series of observations supporting our primary position through quantitative and qualitative analysis. We explore data governance issues and the concern for privacy. We establish new challenges regarding performance and energy efficiency within lifecycle monitoring. Taken together, our analysis suggests there is a sizable opportunity for FL to become a crucial component of AI Act-compliant ML systems and for the new regulation to drive the adoption of FL techniques in general. Most noteworthy are the opportunities to defend against data bias and enhance private and secure computation.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > Canada > Ontario > Toronto (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (1.00)
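The abstract above centers on FL's defining property: models are trained where the data lives and only parameters are aggregated centrally. A minimal sketch of the canonical federated averaging (FedAvg) aggregation step illustrates this; the function name, the single-round scope, and the toy client values are illustrative assumptions, not the paper's method:

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """One FedAvg aggregation round: average client parameter
    vectors weighted by local dataset size. Raw training data
    never leaves the clients; only parameters are shared."""
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# Two hypothetical clients with locally trained parameters and
# local dataset sizes of 1 and 3 samples respectively.
global_params = fedavg(
    [np.array([1.0, 2.0]), np.array([3.0, 4.0])],
    [1, 3],
)
```

The size weighting means clients with more data pull the global model harder, which is exactly where the data-governance and bias questions raised by the AI Act enter the picture.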
DOMINO: Domain-aware Loss for Deep Learning Calibration
Stolte, Skylar E., Volle, Kyle, Indahlastari, Aprinda, Albizu, Alejandro, Woods, Adam J., Brink, Kevin, Hale, Matthew, Fang, Ruogu
Department of Electrical and Computer Engineering, Herbert Wertheim College of Engineering, UF, USA

Deep learning has achieved state-of-the-art performance across medical imaging tasks; however, model calibration is often not considered. Uncalibrated models are potentially dangerous in high-risk applications because the user does not know when they will fail. This paper therefore proposes a novel domain-aware loss function to calibrate deep learning models. The proposed loss function applies a class-wise penalty based on the similarity between classes within a given target domain. The approach thus improves calibration while also ensuring that the model makes less risky errors even when it is incorrect.
- North America > United States (0.90)
- Europe > Switzerland (0.04)
- Government (0.94)
- Health & Medicine > Therapeutic Area (0.94)
- Health & Medicine > Diagnostic Medicine > Imaging (0.91)
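The class-wise penalty described in the abstract can be sketched as standard cross-entropy plus an expected penalty under a class-dissimilarity matrix, so that confusing two dissimilar classes costs more than a near-miss. The function name, the 1e-12 stabilizer, and the penalty values below are illustrative assumptions, not the authors' published formulation:

```python
import numpy as np

def domain_aware_loss(probs, labels, penalty):
    """Cross-entropy plus a class-wise penalty term.

    probs   : (N, C) predicted class probabilities
    labels  : (N,)   integer ground-truth classes
    penalty : (C, C) matrix; penalty[i, j] grows with the
              dissimilarity between true class i and predicted class j
    """
    n = probs.shape[0]
    ce = -np.log(probs[np.arange(n), labels] + 1e-12)
    # Expected penalty of the prediction given the true class:
    # probability mass placed on dissimilar classes costs more
    # than mass placed on similar ("near-miss") classes.
    pen = np.sum(probs * penalty[labels], axis=1)
    return np.mean(ce + pen)

# Hypothetical 3-class domain where classes 0 and 1 are similar
# and classes 0 and 2 are dissimilar.
penalty = np.array([[0., 1., 2.],
                    [1., 0., 1.],
                    [2., 1., 0.]])
labels = np.array([0])
near_miss = domain_aware_loss(np.array([[0.5, 0.5, 0.0]]), labels, penalty)
far_miss = domain_aware_loss(np.array([[0.5, 0.0, 0.5]]), labels, penalty)
```

With equal confidence on the true class, the error toward the dissimilar class (`far_miss`) is penalized more heavily than the near-miss, which is the risk-aware behavior the abstract describes.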
Ebook: O'Reilly: "Machine Learning for High-Risk Applications: Techniques for Responsible AI"
Understanding machine learning (ML) systems is a critical task for data scientists and non-technical professionals alike as organizations aim to integrate AI applications on an enterprise-wide level. In this ebook, we explore practices for identifying cutting-edge, responsible strategies for managing high-impact AI systems and work to understand the concepts and techniques of model interpretability and explainability.
UK Lays Out Regulatory Model For Artificial Intelligence - AI Summary
The British approach to regulation focuses on high-risk applications, setting aside low risks associated with AI so that innovation will not be hampered, and the industry not burdened with red tape. Unlike the EU approach, where the enforcement of the AI Act will be handed down to a single national regulator for each member state, the UK is planning to give responsibility to a range of them. The principles laid out in the British approach "provide clear steers for regulators, but will not necessarily translate into mandatory obligations", the policy statement warns, encouraging them to "consider lighter touch options in the first instance" instead. London recognises the "inherent cross-border nature of the digital ecosystem" and stresses the need to work "closely with partners" to avoid fragmenting the global market, "ensure interoperability and promote the responsible development of AI internationally". Stakeholders in the AI ecosystem are invited to share their views by the end of September about this regulatory approach to inform a forthcoming White Paper on the implementation of such a strategy.
Leading MEPs raise the curtain on draft AI rules
The two European Parliament co-rapporteurs finalised the Artificial Intelligence (AI) draft report on Monday (11 April), covering where they have found common ground. The most controversial issues have been pushed further down the line. Liberal Dragoș Tudorache and social-democrat Brando Benifei have been spearheading the discussion on the AI Act for the civil rights and consumer protection committees of the European Parliament, respectively. "There are things that we agreed already, and they will be in the draft report, and things on which we think that we will agree, but because we haven't found right now the common denominator, we did not put them in the report," Tudorache said. "Our approach has been to make this regulation truly human-centric," Benifei told EURACTIV.
- Law > Civil Rights & Constitutional Law (0.36)
- Government > Regional Government (0.31)
Commission yearns for setting the global standard on artificial intelligence
The European Commission believes that its proposed Artificial Intelligence Act should become the global standard if it is to be fully effective. The upcoming AI treaty being drafted by the Council of Europe might help the EU achieve just that. In April the European Commission launched its proposal for an Artificial Intelligence Act (AIA). Structured around a risk-based approach, the regulation introduces tighter obligations in proportion to the potential impact of AI applications. Commissioner Thierry Breton argued that "one should not underestimate the advantage of the EU being the first mover" and emphasised that the EU is the main "pacemaker" in regulating the use of AI on a global scale. In a similar vein, the Commission's director-general for communications networks, content and technology, Roberto Viola, said that "equilibrium is key to have a horizontal risk-based approach in which many voices are heard to avoid extremism and create rules that last".
- North America > United States (0.05)
- North America > Mexico (0.05)
- North America > Canada (0.05)
- Law (1.00)
- Government > Regional Government > Europe Government (0.74)
STOA meets its International Advisory Board to discuss the Artificial Intelligence Act
Written by Philip Boucher and Carl Pierer. The European Commission published the much-anticipated Artificial Intelligence Act (AIA), an ambitious cross-sectoral attempt to regulate artificial intelligence (AI) applications, on 21 April 2021. Its aim is to ensure that all European citizens can trust AI by providing proportionate and flexible rules – harmonised across the single market – to address the specific risks posed by AI systems and set the highest standards worldwide. The proposal sets out a risk-based approach to regulating AI applications: those presenting an 'unacceptable risk' would be banned, those presenting a 'high risk' would be subjected to additional requirements before entering the market, and others, such as chatbots and 'deep fakes', would be subject to new transparency requirements. Applications presenting 'low or minimal risk' – the vast majority of AI applications – could enter the market without restrictions, although voluntary codes of conduct may be developed. Other proposed measures include a European AI Board to monitor implementation and regulatory sandboxes to facilitate innovation.
- Law (1.00)
- Government > Regional Government (0.35)
The Ethics of AI in Europe
Recently we have seen in the media how artificial intelligence (AI) can help in the pandemic, with many examples in its prediction and monitoring. We have also seen how it can contribute to improving the lives of visually impaired people, with applications such as Microsoft's SeeingAI, and how it can be a great ally for deaf people. AI has also been used to prevent bullying, as the startup WatsomApp does in Spain, and we find many examples where it is used in medical applications to enable more efficient detection of tumors, or where it contributes to the development of the autonomous car or sustainable agriculture, combined with IoT. In short, AI has become an essential technology in a multitude of industries, such as healthcare, banking, manufacturing and commerce, among others. Estimated global spending on AI is expected to exceed $50 billion this year and reach $110 billion by 2024, according to IDC.
- Information Technology (1.00)
- Health & Medicine (1.00)
- Food & Agriculture > Agriculture (0.90)